26 research outputs found

    Sliding to predict: vision-based beating heart motion estimation by modeling temporal interactions

    Purpose: Technical advancements have been part of modern medical solutions, as they promote better surgical alternatives that benefit patients. Particularly in cardiovascular surgery, robotic surgical systems enable surgeons to perform delicate procedures on a beating heart, avoiding the complications of cardiac arrest. This advantage comes at the price of having to deal with a dynamic target, which presents technical challenges for the surgical system. In this work, we propose a solution for cardiac motion estimation. Methods: Our estimation approach uses a variational framework that guarantees preservation of the complex anatomy of the heart. An advantage of our approach is that it takes into account different disturbances, such as specular reflections and occlusion events. This is achieved by a preprocessing step that eliminates specular highlights and a prediction step, based on a conditional restricted Boltzmann machine, that recovers information missing due to partial occlusions. Results: We carried out exhaustive experiments on two datasets, one from a phantom and the other from an in vivo procedure. The results show that our visual approach reaches an average minimum on the order of 10⁻⁷ while preserving the heart's anatomical structure and providing stable values of the Jacobian determinant, ranging from 0.917 to 1.015. We also show that our specular elimination approach reaches an accuracy of 99% compared to a ground truth. In terms of prediction, our approach compared favorably against two well-known predictors, NARX and EKF, giving the lowest average RMSE of 0.071. Conclusion: Our approach avoids the risks of using mechanical stabilizers and can also be effective for acquiring the motion of organs other than the heart, such as the lung, or of other deformable objects. Peer reviewed. Postprint (published version).
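    The occlusion-recovery step described above conditions a restricted Boltzmann machine on past motion frames to fill in missing measurements. Below is a minimal, illustrative sketch of such a prediction step under the assumption of pre-trained weights; the variable names, array shapes and single mean-field pass are assumptions for illustration, not the paper's implementation.

```python
# Sketch of a conditional RBM (CRBM) prediction step, assuming trained weights.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def crbm_predict(history, W, A, B, b_vis, b_hid):
    """Predict the current (possibly occluded) motion vector from past frames.

    history : (n_hist,) concatenated past motion vectors, n_hist = n_past * n_vis
    W       : (n_vis, n_hid)  visible-to-hidden weights
    A       : (n_hist, n_vis) autoregressive (history-to-visible) weights
    B       : (n_hist, n_hid) history-to-hidden weights
    """
    # Dynamic biases conditioned on the motion history
    dyn_vis = b_vis + history @ A          # (n_vis,)
    dyn_hid = b_hid + history @ B          # (n_hid,)
    # One mean-field up-down pass starting from the dynamic visible bias
    h = sigmoid(dyn_vis @ W + dyn_hid)     # hidden expectations
    v = dyn_vis + h @ W.T                  # reconstructed (predicted) frame
    return v

# Toy usage with random weights standing in for trained parameters
rng = np.random.default_rng(0)
n_vis, n_hid, n_past = 6, 12, 3
history = rng.normal(size=n_past * n_vis)
v_pred = crbm_predict(history,
                      rng.normal(scale=0.1, size=(n_vis, n_hid)),
                      rng.normal(scale=0.1, size=(n_past * n_vis, n_vis)),
                      rng.normal(scale=0.1, size=(n_past * n_vis, n_hid)),
                      np.zeros(n_vis), np.zeros(n_hid))
```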

    V-ANFIS for Dealing with Visual Uncertainty for Force Estimation in Robotic Surgery

    Accurate and robust estimation of the applied forces in robotic-assisted minimally invasive surgery is a very challenging task. Many vision-based solutions attempt to estimate the force by measuring the surface deformation after contact with the surgical tool. However, visual uncertainty due to tool occlusion is a major concern and can severely affect the precision of the results. In this paper, a novel design of an adaptive neuro-fuzzy inference strategy with a voting step (V-ANFIS) is used to accommodate this loss of information. Experimental results show a significant accuracy improvement, from 50% to 77%, with respect to other proposals. Peer reviewed. Postprint (published version).
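    The voting step aggregates force estimates from several fuzzy-inference sub-models so that an occlusion-corrupted estimate can be outvoted. A minimal sketch follows, where the force levels, bin edges and sub-model outputs are illustrative assumptions, not values from the paper.

```python
# Sketch of a voting step over the outputs of several fuzzy-inference sub-models.
import numpy as np

def vote_force_level(estimates, bin_edges):
    """Map each sub-model estimate to a discrete force level and majority-vote."""
    levels = np.digitize(estimates, bin_edges)      # discretise each estimate
    counts = np.bincount(levels)                    # votes per level
    return counts.argmax()                          # winning force level

estimates = np.array([0.42, 0.47, 0.10, 0.45])      # e.g. Newtons, one per sub-model
bin_edges = np.array([0.2, 0.4, 0.6])               # hypothetical level boundaries
print(vote_force_level(estimates, bin_edges))       # -> 2; the corrupted outlier
                                                    #    (0.10) is outvoted
```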

    Towards retrieving force feedback in robotic-assisted surgery: a supervised neuro-recurrent-vision approach

    Robotic-assisted minimally invasive surgeries have gained a lot of popularity over conventional procedures, as they offer many benefits to both surgeons and patients. Nonetheless, they still suffer from some limitations that affect their outcome. One of them is the lack of force feedback, which restricts the surgeon's sense of touch and might reduce precision during a procedure. To overcome this limitation, we propose a novel force estimation approach that combines a vision-based solution with supervised learning to estimate the applied force and provide the surgeon with a suitable representation of it. The proposed solution starts by extracting the geometry of motion of the heart's surface, minimizing an energy functional to recover its 3D deformable structure. A deep network, based on an LSTM-RNN architecture, is then used to learn the relationship between the extracted visual-geometric information and the applied force, and to find an accurate mapping between the two. Our proposed force estimation solution avoids the drawbacks usually associated with force-sensing devices, such as biocompatibility and integration issues. We evaluate our approach on phantom and realistic tissues, for which we report an average root-mean-square error of 0.02 N. Peer reviewed. Postprint (author's final draft).
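    A minimal sketch of the kind of recurrent regressor described above, mapping a sequence of per-frame visual-geometric features to a scalar force, written in PyTorch; the feature dimension, layer sizes and training target are illustrative assumptions, not the paper's network.

```python
# Sketch of an LSTM regressor from visual-geometric feature sequences to force.
import torch
import torch.nn as nn

class ForceLSTM(nn.Module):
    def __init__(self, n_features=16, hidden=64, n_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, n_layers, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # regress a scalar force per sequence

    def forward(self, x):                  # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])       # use the last time step

model = ForceLSTM()
features = torch.randn(8, 30, 16)                 # 8 sequences of 30 frames
force = model(features)                           # (8, 1) predicted forces
loss = nn.MSELoss()(force, torch.zeros(8, 1))     # placeholder training target
```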

    Sight to touch: 3D diffeomorphic deformation recovery with mixture components for perceiving forces in robotic-assisted surgery

    Robotic-assisted minimally invasive surgical systems suffer from one major limitation: the lack of interaction force feedback. The restricted sense of touch hinders the surgeons' performance and reduces their dexterity and precision during a procedure. In this work, we present a sensory substitution approach that relies on visual stimuli to transmit the tool-tissue interaction forces to the operating surgeon. Our approach combines a 3D diffeomorphic deformation mapping with a generative model to precisely label the force level. The main highlights of our approach are that the use of a diffeomorphic transformation ensures anatomical structure preservation and that the label assignment is based on a parametric form of several mixture elements. We performed experiments on both ex vivo and in vivo datasets and report careful numerical results evaluating our approach. The results show that our solution has an error of less than 1 mm in all directions and an average labeling error of 2.05%. It is also applicable to other scenarios that require force feedback, such as microsurgery, knot tying, or needle-based procedures. Peer reviewed. Postprint (author's final draft).
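    The labeling step assigns each recovered deformation to a force level via mixture components. Below is a minimal sketch using a Gaussian mixture over a two-dimensional deformation feature (mean displacement and Jacobian-determinant deviation); the chosen features, the three levels and the synthetic data are illustrative assumptions, not the paper's parametric model.

```python
# Sketch of force-level labeling of recovered deformations with a Gaussian mixture.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic training features for three force levels (stand-ins for real data)
low    = rng.normal([0.2, 0.01], 0.05, size=(100, 2))
medium = rng.normal([0.8, 0.05], 0.05, size=(100, 2))
high   = rng.normal([1.5, 0.12], 0.05, size=(100, 2))
X = np.vstack([low, medium, high])

gmm = GaussianMixture(n_components=3, random_state=0).fit(X)

new_deformation = np.array([[0.78, 0.06]])       # features from a new frame
label = gmm.predict(new_deformation)[0]          # mixture component = force level
proba = gmm.predict_proba(new_deformation)[0]    # soft assignment over levels
```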

    Beyond Supervised Classification: Extreme Minimal Supervision with the Graph 1-Laplacian

    We consider the task of classification when an extremely reduced amount of labelled data is available. This problem is of great interest in several real-world settings, as obtaining large amounts of labelled data is expensive and time-consuming. We present a novel semi-supervised framework for multi-class classification that is based on the normalised and non-smooth graph 1-Laplacian. Our transductive framework is framed under a novel functional with carefully selected class priors, which enforces a sufficiently smooth solution and strengthens the intrinsic relation between the labelled and unlabelled data. We demonstrate, through extensive experimental results on the large datasets CIFAR-10 and ChestX-ray14, that our method outperforms classic methods and readily competes with recent deep-learning approaches.
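    For intuition, the following is a minimal transductive label-propagation sketch on a graph built with a Gaussian kernel. Note that it uses the standard (smooth) graph 2-Laplacian normalisation as a simple baseline, not the normalised non-smooth 1-Laplacian functional proposed in the paper; the kernel width, damping factor and toy data are assumptions.

```python
# Sketch of transductive label propagation on a graph (2-Laplacian baseline).
import numpy as np

def propagate_labels(X, labels, n_classes, sigma=1.0, alpha=0.99, n_iter=50):
    """labels[i] in {0..n_classes-1} for labelled points, -1 for unlabelled."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))               # Gaussian affinities
    np.fill_diagonal(W, 0.0)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(W.sum(1)))
    S = D_inv_sqrt @ W @ D_inv_sqrt                  # symmetrically normalised
    Y = np.zeros((len(X), n_classes))
    Y[labels >= 0, labels[labels >= 0]] = 1.0        # one-hot seed labels
    F = Y.copy()
    for _ in range(n_iter):                          # F <- alpha*S*F + (1-alpha)*Y
        F = alpha * S @ F + (1 - alpha) * Y
    return F.argmax(1)

X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])
labels = np.array([0, -1, 1, -1])                    # only two labelled points
print(propagate_labels(X, labels, n_classes=2))      # -> [0 0 1 1]
```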

    Sensory substitution for force feedback recovery: A perception experimental study

    Robotic-assisted surgeries are commonly used today as a more efficient alternative to traditional surgical options. Both surgeons and patients benefit from these systems, as they offer many advantages, including less trauma and blood loss, fewer complications, and better ergonomics. However, a remaining limitation of currently available surgical systems is the lack of force feedback, due to the teleoperation setting, which prevents direct interaction with the patient. Once the force information is obtained, either by a sensing device or indirectly through vision-based force estimation, the question arises of how to transmit this information to the surgeon. An attractive alternative is sensory substitution, which allows information from one sensory modality to be transcoded and presented in a different sensory modality. In the current work, we used visual feedback to convey interaction forces to the surgeon. Our overarching goal was to address the following question: how should interaction forces be displayed to support efficient comprehension by the surgeon without interfering with the surgeon's perception and workflow during surgery? Until now, the use of the visual modality for force feedback has not been carefully evaluated. For this reason, we conducted an experimental study with two aims: (1) to demonstrate the potential benefits of using this modality and (2) to understand the surgeons' perceptual preferences. The results derived from our study of 28 surgeons revealed strong positive acceptance (96%) of this modality by the users. Moreover, we found that for surgeons to easily interpret the information, their mental model must be considered, meaning that the design of the visualizations should fit the perceptual and cognitive abilities of the end user. To our knowledge, this is the first time that these principles have been analyzed for exploring sensory substitution in medical robotics. Finally, we provide user-centered recommendations for the design of visual displays for robotic surgical systems. Peer reviewed. Postprint (author's final draft).
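    One simple way to realise the kind of visual force substitution discussed above is to map force magnitude to a colour for an on-screen overlay. The sketch below is purely illustrative: the force range and the linear green-to-red ramp are assumptions, not the display design evaluated in the study.

```python
# Sketch of mapping a force magnitude to an overlay colour (sensory substitution).
import numpy as np

def force_to_rgb(force_n, f_min=0.0, f_max=2.0):
    """Return an (R, G, B) triple in [0, 1] for a force value in Newtons."""
    t = np.clip((force_n - f_min) / (f_max - f_min), 0.0, 1.0)
    return (t, 1.0 - t, 0.0)          # low force -> green, high force -> red

for f in (0.1, 1.0, 1.9):
    print(f, force_to_rgb(f))
```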

    Sensorless force estimation using a neuro-vision-based approach for robotic-assisted surgery

    This paper addresses the lack of force feedback in robotic-assisted minimally invasive surgeries. Force is an important measure for surgeons, helping to prevent intra-operative complications and tissue damage. Thus, an innovative neuro-vision-based force estimation approach is proposed. Tissue surface displacement is first measured via minimization of an energy functional. A neural approach is then used to establish a geometric-visual relation and estimate the applied force. The proposed approach eliminates the need for add-on sensors and biocompatibility studies, and is applicable to tissues of any shape. Moreover, we report an improvement of 15.14% to 56.16% over other approaches, which demonstrates the potential of our proposal. Peer reviewed. Postprint (author's final draft).
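    The displacement-recovery step above minimizes an energy functional over the observed tissue surface. Below is a minimal one-dimensional sketch by gradient descent on a toy functional E(u) = ||u - d||^2 + lam * ||grad u||^2, where d are noisy point-wise displacement observations; the functional, weight and step size are illustrative stand-ins for the paper's variational formulation.

```python
# Sketch of recovering a smooth displacement field by minimizing a toy energy.
import numpy as np

def recover_displacement(d, lam=5.0, step=0.02, n_iter=500):
    u = d.copy()
    for _ in range(n_iter):
        grad_data = 2.0 * (u - d)                       # data-fidelity term
        lap = np.zeros_like(u)                          # discrete Laplacian of u
        lap[1:-1] = u[:-2] - 2.0 * u[1:-1] + u[2:]
        grad_smooth = -2.0 * lam * lap                  # smoothness term
        u -= step * (grad_data + grad_smooth)           # gradient-descent update
    return u

rng = np.random.default_rng(2)
d = np.sin(np.linspace(0, np.pi, 50)) + rng.normal(0, 0.1, 50)   # noisy surface
u = recover_displacement(d)                                       # smoothed field
```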
